
    Nodally Integrated Finite Elements

    Nodal integration of finite elements has been investigated recently. Compared with full integration, it shows better convergence when applied to incompressible media, allows easier remeshing, and greatly reduces the number of material evaluation points, thus improving efficiency. Furthermore, understanding it may help in creating new integration schemes for meshless methods as well. The new integration technique requires a nodally averaged deformation gradient. For the tetrahedral element it is possible to formulate a nodal strain which passes the patch test. On the downside, it introduces non-physical low-energy modes. Most of these "spurious modes" are local deformation maps of neighbouring elements. Present stabilization schemes rely on adding a stabilizing potential to the strain energy. This stabilization is discussed within the article, and its drawbacks are easily identified in numerical experiments: nonlinear material laws are not well represented, plastic strains may be underestimated, and geometrically nonlinear stabilization greatly reduces computational efficiency. The article reinterprets nodal integration as imposing a nonconforming C0-continuous strain field on the structure. From this viewpoint, the origins of the spurious modes are discussed and two methods are presented that solve this problem. First, a geometric constraint is formulated and solved using a mixed formulation of Hu-Washizu type. This assumption leads to a consistent representation of the strain energy while eliminating spurious modes. The solution is exact, but only of theoretical interest since it produces global support. Second, an integration scheme is presented that approximates the stabilization criterion. The latter leads to a highly efficient scheme which can even be extended to other finite element types such as hexahedra. Numerical efficiency, convergence behaviour and stability of the new method are validated using linear tetrahedral and hexahedral elements.
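
    To make the key ingredient concrete, here is a minimal C++ sketch of a nodally averaged deformation gradient on a linear tetrahedral mesh: each node receives a volume-weighted average of the constant gradients of its adjacent elements. The weighting and all names are illustrative assumptions, not the article's exact formulation.

    ```cpp
    #include <array>
    #include <vector>

    using Vec3 = std::array<double, 3>;
    using Mat3 = std::array<std::array<double, 3>, 3>;

    // Columns of D are the three edge vectors of a tetrahedron from vertex a.
    static Mat3 edgeMatrix(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& d) {
        Mat3 D{};
        for (int i = 0; i < 3; ++i) { D[i][0] = b[i]-a[i]; D[i][1] = c[i]-a[i]; D[i][2] = d[i]-a[i]; }
        return D;
    }

    static double det(const Mat3& m) {
        return m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
             - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
             + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]);
    }

    // Inverse via the adjugate; cyclic indices give the cofactor signs.
    static Mat3 inverse(const Mat3& m) {
        Mat3 r{};
        double id = 1.0 / det(m);
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                r[j][i] = (m[(i+1)%3][(j+1)%3] * m[(i+2)%3][(j+2)%3]
                         - m[(i+1)%3][(j+2)%3] * m[(i+2)%3][(j+1)%3]) * id;
        return r;
    }

    static Mat3 mul(const Mat3& a, const Mat3& b) {
        Mat3 r{};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                for (int k = 0; k < 3; ++k) r[i][j] += a[i][k]*b[k][j];
        return r;
    }

    // F_n = (1/V_n) * sum_e V_e * F_e over all tets e adjacent to node n,
    // where F_e = Dx * inverse(DX) is the constant element gradient.
    std::vector<Mat3> nodalDeformationGradients(
            const std::vector<Vec3>& X,                 // reference coordinates
            const std::vector<Vec3>& x,                 // deformed coordinates
            const std::vector<std::array<int, 4>>& tets) {
        std::vector<Mat3> F(X.size(), Mat3{});
        std::vector<double> Vn(X.size(), 0.0);
        for (const auto& t : tets) {
            Mat3 DX = edgeMatrix(X[t[0]], X[t[1]], X[t[2]], X[t[3]]);
            Mat3 Dx = edgeMatrix(x[t[0]], x[t[1]], x[t[2]], x[t[3]]);
            Mat3 Fe = mul(Dx, inverse(DX));
            double Ve = det(DX) / 6.0;                  // signed tet volume
            for (int v : t) {
                Vn[v] += Ve;
                for (int i = 0; i < 3; ++i)
                    for (int j = 0; j < 3; ++j) F[v][i][j] += Ve * Fe[i][j];
            }
        }
        for (size_t n = 0; n < F.size(); ++n)
            if (Vn[n] > 0.0)
                for (int i = 0; i < 3; ++i)
                    for (int j = 0; j < 3; ++j) F[n][i][j] /= Vn[n];
        return F;
    }
    ```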

    Verification of Non-blocking Data Structures with Manual Memory Management

    Verification of concurrent data structures is one of the most challenging tasks in software verification. The topic has received considerable attention over the course of the last decade. Nevertheless, human-driven techniques remain cumbersome and notoriously difficult, while automated approaches suffer from limited applicability. This is particularly true in the absence of garbage collection. The intricacy of non-blocking manual memory management (manual memory reclamation), paired with the complexity of concurrent data structures, has so far made automated verification prohibitive. We tackle the challenge of automated verification of non-blocking data structures which manually manage their memory. To that end, we contribute several insights that greatly simplify the verification task. The guiding theme of these simplifications is semantic reduction: we show that the verification of a data structure's complicated target semantics can be conducted in a simpler and smaller semantics which is more amenable to automatic techniques. Some of our reductions rely on good-conduct properties of the data structure. The properties we use are derived from practice, for instance by exploiting common programming patterns. Furthermore, we show how to automatically check for those properties under the smaller semantics. The main contributions are: (i) a compositional verification approach that verifies the memory management and the data structure separately; (ii) a notion of weak ownership that applies when memory is reclaimed and reused, bridging the gap between garbage collection and manual memory management; (iii) a notion of pointer races and harmful ABAs, the absence of which ensures that the memory management does not influence the data structure, i.e., the data structure behaves as if executed under garbage collection; notably, we show that a check for pointer races and harmful ABAs only needs to consider executions where at most a single address is reused; (iv) a notion of strong pointer races, the absence of which entails the absence of ordinary pointer races and harmful ABAs; we devise a highly efficient type check for strong pointer races, and after a successful type check the actual verification can be performed under garbage collection using an off-the-shelf verifier; and (v) experimental evaluations of the aforementioned contributions. We are the first to fully automatically verify practical non-blocking data structures with manual memory management.
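
    To make contribution (iii) concrete, the following C++ sketch shows the pop operation of a Treiber stack under manual reclamation. It deliberately exhibits a harmful ABA; the code and all names are our illustration, not material from the thesis.

    ```cpp
    #include <atomic>

    struct Node { int value; Node* next; };
    std::atomic<Node*> top{nullptr};

    bool pop(int& out) {
        for (;;) {
            Node* t = top.load();
            if (t == nullptr) return false;
            Node* n = t->next;  // unsafe: t may already have been freed
            if (top.compare_exchange_strong(t, n)) {
                // Harmful ABA: between the read of t->next and this CAS,
                // another thread may pop t, delete it, and push a new node
                // that happens to reuse t's address. The CAS then succeeds
                // although n is stale, corrupting the stack.
                out = t->value;
                delete t;       // manual reclamation is what enables the reuse
                return true;
            }
        }
    }
    ```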

    Pointer Race Freedom

    We propose a novel notion of pointer race for concurrent programs manipulating a shared heap. A pointer race is an access to a memory address which has been freed, where it is out of the accessor's control whether or not the cell has been re-allocated. We establish two results. (1) Under the assumption of pointer race freedom, it is sound to verify a program running under explicit memory management as if it were running with garbage collection. (2) Even the requirement of pointer race freedom itself can be verified under the garbage-collected semantics. We then prove analogues of these theorems for a stronger notion of pointer race, needed to cope with performance-critical code that purposely uses racy comparisons and even racy dereferences of pointers. As a practical contribution, we apply our results to optimize a thread-modular analysis under explicit memory management. Our experiments confirm a speed-up of up to two orders of magnitude.
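
    The distinction between an ordinary pointer race and the deliberately "racy" accesses tolerated by the stronger notion can be seen in the following C++ sketch; it is our illustration, with invented names, and the comments indicate the intended interleaving.

    ```cpp
    #include <cstdlib>

    int* shared = new int(42);

    void reclaimer() {
        delete shared;                    // the cell is handed back to the allocator
    }

    void accessor(int* observed) {        // 'observed' was read before the delete
        int v = *observed;                // pointer race: the cell may have been
        (void)v;                          // freed and even re-allocated by now
        bool hit = (observed == shared);  // racy comparison: touches only the
        (void)hit;                        // address, never the cell's contents
    }
    ```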

    Asynchronous variational integration using continuous assumed gradient elements

    Asynchronous variational integration (AVI) is a technique which improves the numerical efficiency of explicit time-stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length with each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2), Liu (2009) [1], exemplified by continuous assumed gradient elements, Wolff and Bucher (2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes, and provides a recipe for estimating the critical time step.
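
    The core of AVI is a time-ordered priority queue of per-element updates; the sketch below shows that loop in C++ with the mechanics (force evaluation, nodal updates) elided. Structure and names are our assumptions, not the article's implementation.

    ```cpp
    #include <queue>
    #include <vector>

    struct Event { double t; int elem; };
    struct Later { bool operator()(const Event& a, const Event& b) const { return a.t > b.t; } };

    void runAVI(const std::vector<double>& dt,    // individual per-element time step
                double tEnd,
                void (*advanceElement)(int, double)) {
        std::priority_queue<Event, std::vector<Event>, Later> queue;
        for (int e = 0; e < (int)dt.size(); ++e) queue.push({dt[e], e});
        while (!queue.empty() && queue.top().t <= tEnd) {
            Event ev = queue.top(); queue.pop();
            advanceElement(ev.elem, ev.t);        // momentum kick from this element's
                                                  // potential, then drift of its nodes
            queue.push({ev.t + dt[ev.elem], ev.elem});
        }
    }
    ```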

    Distance fields on unstructured grids: Stable interpolation, assumed gradients, collision detection and gap function

    This article presents a novel approach to collision detection based on distance fields. A novel interpolation ensures stability of the distances in the vicinity of complex geometries. An assumed gradient formulation is introduced, leading to a C1-continuous distance function. The gap function is re-expressed, allowing penalty and Lagrange multiplier formulations. The article introduces a node-to-element integration for first-order elements, but also discusses signed distances, partial updates, intermediate surfaces, mortar methods and higher-order elements. The algorithm is fast, simple and robust for complex geometries and self-contact. The computed tractions conserve linear and angular momentum even in infeasible contact. Numerical examples illustrate the new algorithm in three dimensions.
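
    As a sketch of how a gap function built from a signed distance field feeds a penalty formulation, consider the following C++ fragment; the field interface and the linear penalty law are illustrative assumptions, not the article's formulation.

    ```cpp
    #include <array>

    using Vec3 = std::array<double, 3>;

    struct DistanceField {
        double (*value)(const Vec3&);    // signed distance g(x), negative inside
        Vec3 (*gradient)(const Vec3&);   // gradient of the C1-continuous field
    };

    // Penalty traction: zero when separated, k * |g| along the field
    // gradient (approximately the surface normal) when penetrating.
    Vec3 penaltyTraction(const DistanceField& f, const Vec3& x, double k) {
        double g = f.value(x);
        if (g >= 0.0) return {0.0, 0.0, 0.0};
        Vec3 n = f.gradient(x);
        return {-k * g * n[0], -k * g * n[1], -k * g * n[2]};
    }
    ```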

    A Systematic Comparison of Music Similarity Adaptation Approaches

    In order to support individual user perspectives and different retrieval tasks, music similarity can no longer be considered a static element of Music Information Retrieval (MIR) systems. Various approaches have been proposed recently that allow dynamic adaptation of music similarity measures. This paper provides a systematic comparison of algorithms for metric learning and higher-level facet distance weighting on the MagnaTagATune dataset. A cross-validation variant that takes clip availability into account is presented, and its effect on adaptation performance is analyzed on user-generated similarity data. Special attention is paid to the amount of training data necessary for making similarity predictions on unknown data, the number of model parameters, and the amount of information available about the music itself.
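
    As an illustration of facet distance weighting, the following C++ sketch adapts nonnegative facet weights from relative similarity constraints ("clip a is more similar to b than to c"). The perceptron-style update is our illustrative choice, not necessarily one of the algorithms compared in the paper.

    ```cpp
    #include <algorithm>
    #include <vector>

    // d[f] holds the distance in facet f (e.g., timbre, rhythm, tags).
    double weightedDist(const std::vector<double>& w, const std::vector<double>& d) {
        double s = 0.0;
        for (size_t f = 0; f < w.size(); ++f) s += w[f] * d[f];
        return s;
    }

    struct Constraint { std::vector<double> dab, dac; };  // facet distances (a,b), (a,c)

    void adaptWeights(std::vector<double>& w, const std::vector<Constraint>& data,
                      double eta, int epochs) {
        for (int it = 0; it < epochs; ++it)
            for (const auto& c : data)
                if (weightedDist(w, c.dab) >= weightedDist(w, c.dac))  // violated
                    for (size_t f = 0; f < w.size(); ++f)              // pull (a,b) closer
                        w[f] = std::max(0.0, w[f] - eta * (c.dab[f] - c.dac[f]));
    }
    ```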

    Context-Aware Separation Logic

    Separation logic is often praised for its ability to closely mimic the locality of state updates when reasoning about them at the level of assertions. The prover only needs to concern themselves with the footprint of the computation at hand, i.e., the part of the state that is actually being accessed and manipulated. Modern concurrent separation logics lift this local reasoning principle from the physical state to abstract ghost state. For instance, these logics allow one to abstract the state of a fine-grained concurrent data structure by a predicate that provides a client with the illusion of atomic access to the underlying state. However, these abstractions inadvertently increase the footprint of a computation: when reasoning about a local low-level state update, one needs to account for its effect on the abstraction, which encompasses a possibly unbounded portion of the low-level state. Often this gives the reasoning a global character. We present context-aware separation logic (CASL) to provide new opportunities for local reasoning in the presence of rich ghost state abstractions. CASL introduces the notion of a context of a computation: the part of the concrete state that is only affected on the abstract level. Contexts give rise to a new proof rule that allows one to reduce the footprint by the context, provided the computation preserves the context as an invariant. The context rule complements the frame rule of separation logic by enabling more local reasoning in cases where the predicate to be framed is known in advance. We instantiate the developed theory for the flow framework, which enables local reasoning about global properties of heap graphs. We then use this instantiation to obtain a fully local proof of functional correctness for a sequential binary search tree implementation that is inspired by fine-grained concurrent search structures.
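
    As orientation, the following schematic contrasts the standard frame rule with our paraphrase of the context rule described above. The connective and side condition in the second rule are our informal rendering of the abstract, not the paper's formal statement.

    ```latex
    % Frame rule (standard separation logic): F is disjoint state that C
    % never touches.
    \[
    \frac{\{P\}\; C\; \{Q\}}
         {\{P \ast F\}\; C\; \{Q \ast F\}}
    \quad (\textsc{Frame})
    \]
    % Context rule (schematic paraphrase): K is a context that C may touch
    % on the concrete level but preserves as an invariant on the abstract
    % level, so the footprint can be reduced by K.
    \[
    \frac{\{P\}\; C\; \{Q\} \qquad C \text{ preserves } K}
         {\{P \circ K\}\; C\; \{Q \circ K\}}
    \quad (\textsc{Context})
    \]
    ```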

    Say Hello to Your New Automated Tutor – A Structured Literature Review on Pedagogical Conversational Agents

    In this paper, we present the current state of the art in using conversational agents for educational purposes. These so-called pedagogical conversational agents are a specialized type of e-learning and intelligent tutoring system. The main difference from traditional e-learning and intelligent tutoring systems is that they interact with learners using natural language dialogs, e.g. in the form of chatbots. For our research project, we analyzed current trends in this research stream as well as its research gaps. Our results show, for instance, that (1) there is a trend towards using mobile conversational agents in education, (2) a proper generalization of existing research results (e.g. design knowledge) is missing, and (3) there is a need for comprehensive in-depth evaluation studies and corresponding process models. Based on our results, we outline a research agenda for future studies.

    Embedding Hindsight Reasoning in Separation Logic

    Proving linearizability of concurrent data structures remains a key challenge for verification. We present temporal interpolation as a new proof principle for conducting such proofs using hindsight arguments within concurrent separation logic. Temporal reasoning offers an easy-to-use alternative to prophecy variables and has the advantage of structuring proofs into easy-to-discharge hypotheses. To hindsight theory, our work brings the formal rigor and proof machinery of concurrent program logics. We substantiate the usefulness of our development by verifying the linearizability of the Logical Ordering (LO-)tree and RDCSS. Both involve complex proof arguments due to future-dependent linearization points; the LO-tree additionally features complex structure overlays. Our proof of the LO-tree is the first formal proof of this data structure. Interestingly, our formalization revealed a previously unknown bug and showed an existing informal proof to be erroneous.
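
    RDCSS lends itself to a short sketch of why its linearization point is future-dependent. The C++ below follows Harris et al.'s original formulation of RDCSS (an assumption; the paper's presentation may differ): once a descriptor is installed, success or failure is decided only later, by the read of the control address in complete().

    ```cpp
    #include <atomic>
    #include <cstdint>

    struct Descriptor {
        std::atomic<uint64_t>* a1; uint64_t o1;        // control address, expected value
        std::atomic<uint64_t>* a2; uint64_t o2, n2;    // data address, expected, new
    };

    // Application values are assumed to keep the low bit clear, so a set
    // low bit marks a stored word as a descriptor pointer.
    static bool is_desc(uint64_t v)       { return v & 1u; }
    static uint64_t pack(Descriptor* d)   { return reinterpret_cast<uint64_t>(d) | 1u; }
    static Descriptor* unpack(uint64_t v) { return reinterpret_cast<Descriptor*>(v & ~uint64_t{1}); }

    static void complete(Descriptor* d) {
        uint64_t expected = pack(d);
        // Decide based on a1 *after* installation: write n2 iff a1 == o1.
        // This later read fixes whether the already-installed operation
        // succeeded -- the future-dependent linearization point.
        d->a2->compare_exchange_strong(expected, d->a1->load() == d->o1 ? d->n2 : d->o2);
    }

    uint64_t rdcss(Descriptor* d) {
        for (;;) {
            uint64_t r = d->o2;
            if (d->a2->compare_exchange_strong(r, pack(d))) {  // try to install
                complete(d);
                return d->o2;
            }
            if (!is_desc(r)) return r;   // plain value != o2: operation fails
            complete(unpack(r));         // help the conflicting operation, retry
        }
    }
    ```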

    Beliefs about others: A striking example of information neglect

    In many games of imperfect information, players can make Bayesian inferences about other players' types based on the information contained in their own type. Several behavioral theories of belief updating even start from the assumption that players project their own type onto others even when it is not rational. We investigate such inferences in a simple environment that is a vital ingredient of numerous game-theoretic models and experiments, in which types are drawn from one of two states of the world and participants have to guess the type of another participant. We find little evidence of irrational over-projection. Instead, between 50% and 70% of the participants in our experiment completely neglect the information contained in their own type and base their beliefs and choices only on the prior probabilities. Using several experimental interventions, we show that this striking neglect of information is very robust.
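
    To spell out the rational benchmark against which the neglect is measured, here is a worked Bayesian example with illustrative numbers; they are not the paper's actual parameters.

    ```latex
    % Two equally likely states, A and B. In state A a participant's type
    % is "red" with probability 2/3, in state B with probability 1/3.
    % A red participant should update:
    \[
    P(A \mid \text{red})
      = \frac{P(\text{red} \mid A)\, P(A)}
             {P(\text{red} \mid A)\, P(A) + P(\text{red} \mid B)\, P(B)}
      = \frac{\frac{2}{3} \cdot \frac{1}{2}}
             {\frac{2}{3} \cdot \frac{1}{2} + \frac{1}{3} \cdot \frac{1}{2}}
      = \frac{2}{3}.
    \]
    % Since the other participant's type is drawn from the same state,
    % P(other red | own red) = (2/3)(2/3) + (1/3)(1/3) = 5/9 > 1/2.
    % Answering with the prior 1/2 instead is the information neglect
    % documented above.
    ```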